Nonlinear rescaling vs. smoothing technique in convex optimization

Author

  • Roman A. Polyak
Abstract

We introduce an alternative to the smoothing technique for constrained optimization. As it turns out, for any given smoothing function there exists a modification with particular properties. We use this modification for Nonlinear Rescaling (NR) of the constraints of a given constrained optimization problem into an equivalent set of constraints. The constraint transformation is scaled by a vector of positive parameters. The Lagrangian for the equivalent problem is to the corresponding Smoothing Penalty function as the Augmented Lagrangian is to the Classical Penalty function, or the Modified Barrier Functions (MBFs) to the Barrier Functions. Moreover, the Lagrangians for the equivalent problems combine the best properties of Quadratic and Nonquadratic Augmented Lagrangians and, at the same time, are free of their main drawbacks. Sequential unconstrained minimization of the Lagrangian for the equivalent problem in the primal space, followed by an update of both the Lagrange multipliers and the scaling parameters, leads to a new class of NR multipliers methods, which are equivalent to Interior Quadratic Prox methods for the dual problem. We prove convergence of the NR multipliers method under very mild assumptions on the input data and estimate its rate of convergence under various assumptions on the input data. In particular, under the standard second order optimality conditions, the NR method converges with a Q-linear rate without unbounded increase of the scaling parameters that correspond to the active constraints. We also establish global quadratic convergence of the NR methods for Linear Programming with a unique dual solution. We provide numerical results, which strongly support the theory.
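To make the scheme concrete, here is a minimal sketch of a generic NR multipliers step of the kind the abstract describes, written for the problem $\min f(x)$ subject to $c_i(x) \ge 0$, $i = 1, \dots, m$. The properties assumed for the transformation $\psi$ and the exact update rules follow the standard NR setting and are given for illustration; they are not details taken from this abstract.

Assume $\psi$ is smooth, increasing, and strictly concave with $\psi(0) = 0$ and $\psi'(0) = 1$. For any scaling vector $k = (k_1, \dots, k_m)$ with $k_i > 0$,
$$c_i(x) \ge 0 \;\Longleftrightarrow\; k_i^{-1}\,\psi\bigl(k_i\, c_i(x)\bigr) \ge 0, \qquad i = 1, \dots, m,$$
so the rescaled constraints define a problem equivalent to the original one. The Lagrangian for the equivalent problem is
$$\mathcal{L}(x, \lambda, k) = f(x) - \sum_{i=1}^{m} \lambda_i\, k_i^{-1}\, \psi\bigl(k_i\, c_i(x)\bigr),$$
and one NR multipliers step alternates the unconstrained primal minimization
$$x^{s+1} \in \operatorname*{arg\,min}_{x} \; \mathcal{L}(x, \lambda^{s}, k^{s})$$
with the multiplier update
$$\lambda_i^{s+1} = \lambda_i^{s}\, \psi'\bigl(k_i^{s}\, c_i(x^{s+1})\bigr), \qquad i = 1, \dots, m,$$
followed by an update of the scaling vector $k^{s}$ (the specific rule is part of the method and is not detailed here). Since $\psi' > 0$, the update keeps $\lambda^{s+1} > 0$ whenever $\lambda^{s} > 0$; viewed in the dual space, eliminating $x^{s+1}$ turns this scheme into an interior proximal step for the dual problem, which is the equivalence to Interior Quadratic Prox methods stated above.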


Similar articles

Solution of network localization problem with noisy distances and its convergence

The network localization problem with convex and non-convex distance constraints may be modeled as a nonlinear optimization problem. The existing localization techniques are mainly based on convex optimization. In those techniques, the non-convex distance constraints are either ignored or relaxed into convex constraints in order to use convex optimization methods like SDP, least square approximat...


Primal-dual nonlinear rescaling method with dynamic scaling parameter update

In this paper we develop a general primal-dual nonlinear rescaling method with dynamic scaling parameter update (PDNRD) for convex optimization. We prove global convergence and establish a 1.5-Q-superlinear rate of convergence under the standard second order optimality conditions. The PDNRD was numerically implemented and tested on a number of nonlinear problems from the COPS and CUTE sets. We p...


Proximal Point Nonlinear Rescaling Method for Convex Optimization

Nonlinear rescaling (NR) methods alternate finding an unconstrained minimizer of the Lagrangian for the equivalent problem in the primal space (which is an infinite procedure) with a Lagrange multipliers update. We introduce and study a proximal point nonlinear rescaling (PPNR) method that preserves convergence and retains a linear convergence rate of the original NR method and at the same time d...


Nonlinear Rescaling in discrete minimax

We present a general Nonlinear Rescaling (NR) method for the discrete minimax problem. The fundamental difference between the NR approach and the smoothing technique consists in using the Lagrange multipliers as the main driving force to improve the convergence rate and the numerical stability. In contrast to the smoothing technique, the NR methods converge to the primal-dual solution under a fixed...


Epi-convergent Smoothing with Applications to Convex Composite Functions

Smoothing methods have become part of the standard tool set for the study and solution of nondifferentiable and constrained optimization problems as well as a range of other variational and equilibrium problems. In this note we synthesize and extend recent results due to Beck and Teboulle on infimal convolution smoothing for convex functions with those of X. Chen on gradient consistency for non...



Journal:
  • Math. Program.

Volume 92, Issue -

Pages -

Publication date 2002